
A boom in AI adoption is expanding the attack surface for malicious actors.
According to IDC*, worldwide AI spending is forecast to reach almost $750bn by 2028. That figure, which illustrates the pace of adoption and growth, also signals the scale of new data security challenges: cybersecurity professionals tasked with managing the risks of AI are left grappling with a loss of both visibility and control over company data.
As with cloud adoption a decade ago, the threats are not limited to one particular behaviour or implementation model.
The use of “shadow AI” (unmanaged AI tool use) is a familiar risk vector, with employees trading compliance for convenience. Nearly half (47%) of generative AI users use personal AI apps, according to the Netskope Cloud and Threat Report 2026. This unofficial usage is problematic because employees could unknowingly be sharing sensitive data with third-party cloud applications, and many organizations struggle to maintain visibility and control over it.
But what about organizations building their own private AI models (often directly motivated by a desire for greater security)? Here the risks broaden, including responsibility for securing both a model’s training data and its outputs.
And the latest boom area for AI is agentic, with autonomous agents carrying out tasks independently and with little human oversight. This is a boon to productivity, but it raises issues around data access and control. By 2028, 25% of enterprise breaches will be tied to AI agent abuse, according to a Gartner prediction.
With these trends in mind, let’s explore some security considerations that serve as a useful checklist to ensure your organization is tackling the risks from all angles.
1) Use zero trust as the AI security foundation
Securing AI is a unique data security challenge because of the way AI models process inputs and generate outputs. Extending a zero trust security framework to AI ensures that every request (from every human, model, and agent) is verified, every data flow is monitored, and access is granted based on dynamic risk assessments rather than static permissions. Research shows that two-thirds of organizations say their zero trust controls simply can’t secure non-human identities (NHIs) today, yet 78% expect NHI growth to outpace human identity growth within 12 months: a looming risk.

Consider how you will build AI security into your existing security frameworks and architectures, and resist the lure of bolt-on point products that add complexity to your stack and are resource intensive. Look instead for unified platforms with integrated security policy and administration to reduce the administrative burden.
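To make the idea of dynamic, per-request risk assessment concrete, here is a minimal illustrative sketch (not Netskope code; all names, signals, and weights are hypothetical). It scores each access request from a human or non-human identity at decision time, rather than relying on a static allow-list:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    identity: str              # e.g. a user, a model, or an agent
    identity_type: str         # "human" or "nhi" (non-human identity)
    resource_sensitivity: int  # 1 (public) .. 5 (restricted)
    device_trusted: bool       # posture signal from the requesting device
    anomaly_score: float       # 0.0 (normal) .. 1.0 (highly anomalous)

def risk_score(req: AccessRequest) -> float:
    """Combine live signals into a per-request risk score (higher = riskier)."""
    score = req.resource_sensitivity / 5.0
    score += 0.0 if req.device_trusted else 0.3
    score += req.anomaly_score * 0.5
    # Non-human identities carry no standing trust by default.
    if req.identity_type == "nhi":
        score += 0.2
    return score

def decide(req: AccessRequest, threshold: float = 1.0) -> str:
    """Verify every request at access time; no static permissions."""
    return "allow" if risk_score(req) < threshold else "deny"
```

The point of the sketch is that the same agent can be allowed one request and denied the next as its behaviour or the resource sensitivity changes, which is exactly what static permissions cannot express.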
2) Hone your omniscience in the face of AI omnipresence (get yourself an all-seeing eye)
Only 6% of organizations have complete visibility into AI use, and we all know that security teams can’t secure what they can’t see—a challenge made worse when the lines between personal and corporate tool use are blurring. Without visibility into how and where AI tools are being used, an organization becomes exposed to potential data leakage. Make tools that provide visibility a top priority and starting point for AI security.
Your AI security strategy needs to cover more than just new AI apps and models. Every SaaS app has been busily introducing AI enhancements over the past few years, and while they were much hyped at the start, these AI-powered updates are becoming the norm. This means that, any day, one of your approved SaaS tools could introduce new AI-powered functionality that may not meet your security requirements. These under-the-radar updates often happen without any visibility for the security teams who should be assessing risk before approval. Manual monitoring and administration won’t cut it here, so apply the same approach to AI risk management as you would to cloud risk. The Netskope Cloud Confidence Index (CCI) provides real-time insights into more than 83,000 cloud and SaaS applications, and risk evaluation for more than 10,000 public MCP servers, identifying risky attributes, authentication types, and protocol versions before deployment. Outsource the legwork!
3) Mind the AI governance gap
Two-thirds of organizations rate their AI governance as only reactive or developing, a third describe fragmented adoption, and 38% wish they had started governance before adoption scaled. Without clear guardrails, companies using AI are opening the door to security threats. Avoid joining the 38%: set out a framework of policies to ensure proper AI use, covering both pre- and post-deployment.
4) Uplevel skills AND tooling
Almost three-quarters (73%) of people in the UK have had no AI training or education, yet 31% of organizations rely on written policies and employee compliance as their primary enforcement method for AI security: essentially an honor system shrouded in mystery. This disparity creates risks of improper tool use and almost inevitably leaves organizations falling foul of compliance requirements, particularly in highly regulated industries.
Invest in training and tools to support proper AI interactions, ensuring everyone in the organization is clear on the risks, the approved policies for safe AI use, and the regulations they need to be aware of. But do not stop there. Even the most highly trained workforce needs help, and Gartner predicts that, by 2028, at least 15% of daily business decisions will be made autonomously through agentic AI, up from 0% in 2024. Human skills in AI security are like driver’s licences: they systematically reduce risk, but are best deployed alongside risk management tools such as seatbelts and ABS braking, and they may become increasingly irrelevant once driverless cars hit the road. Invest in securing AI models and the corporate data they interact with, recognising where new tools (such as AI security gateways and guardrails) are needed alongside existing best practice for helping human users make secure decisions.
Security in the AI Fast Lane
Netskope enables its customers to adopt AI at enterprise scale without expanding risk. We secure users, agents, applications, and data across public, private, and agentic AI with unified protection, from pre-deployment model hardening to runtime threat prevention. Netskope One AI Security is the only platform that secures every AI interaction while maintaining the speed and experience that your teams demand.
Move fast. Stay protected. Maintain total control.
To learn more about securing AI across the enterprise, visit netskope.com/ai.
*IDC Market Forecast, Worldwide Artificial Intelligence IT Spending Forecast, 2024–2028, Rick Villars et al., October 2024, Doc #US52635424